social problem
Large Language Models show both individual and collective creativity comparable to humans
Sun, Luning, Yuan, Yuzhuo, Yao, Yuan, Li, Yanyan, Zhang, Hao, Xie, Xing, Wang, Xiting, Luo, Fang, Stillwell, David
Artificial intelligence has, so far, largely automated routine tasks, but what does it mean for the future of work if Large Language Models (LLMs) show creativity comparable to humans? To measure the creativity of LLMs holistically, the current study uses 13 creative tasks spanning three domains. We benchmark the LLMs against individual humans, and also take a novel approach by comparing them to the collective creativity of groups of humans. We find that the best LLMs (Claude and GPT-4) rank in the 52nd percentile against humans, and overall LLMs excel in divergent thinking and problem solving but lag in creative writing. When an LLM is questioned 10 times, its collective creativity is equivalent to that of 8-10 humans. When more responses are requested, two additional LLM responses equal one extra human. Ultimately, LLMs, when optimally applied, may compete with a small group of humans in the future of work.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Asia > China > Beijing > Beijing (0.04)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study > Negative Result (0.45)
General Automatic Solution Generation of Social Problems
Niu, Tong, Huang, Haoyu, Du, Yu, Zhang, Weihao, Shi, Luping, Zhao, Rong
Given the escalating intricacy and multifaceted nature of contemporary social systems, manually generating solutions to address pertinent social issues has become a formidable task. In response to this challenge, the rapid development of artificial intelligence has spurred the exploration of computational methodologies aimed at automatically generating solutions. However, current methods for auto-generation of solutions mainly concentrate on local social regulations that pertain to specific scenarios. Here, we report an automatic social operating system (ASOS) designed for general social solution generation, which is built upon agent-based models, enabling both global and local analyses and regulations of social problems across spatial and temporal dimensions. ASOS adopts a hypergraph with extensible social semantics for a comprehensive and structured representation of social dynamics. It also incorporates a generalized protocol for standardized hypergraph operations and a symbolic hybrid framework that delivers interpretable solutions, yielding a balance between regulatory efficacy and function viability. To demonstrate the effectiveness of ASOS, we apply it to the domain of averting extreme events within international oil futures markets. By generating a new trading role supplemented by new mechanisms, ASOS can adeptly discern precarious market conditions and make front-running interventions for non-profit purposes. This study demonstrates that ASOS provides an efficient and systematic approach for generating solutions for enhancing our society.
- Asia > China (0.30)
- Europe > Germany (0.28)
- North America > United States > New York (0.14)
- (4 more...)
- Energy > Oil & Gas > Trading (1.00)
- Banking & Finance > Trading (1.00)
Apps Are Rushing to Add AI. Is Any of It Useful?
Ever since the ChatGPT API opened up, all sorts of apps have been strapping on AI functionality. I've personally noticed this a lot in email clients: Apps like Spark and Canary are prominently bragging about their built-in AI functionality. The most common features will write replies for you, or even generate an entire email using only a prompt. Some will summarize a long email in your inbox or even a thread. It's a great idea in the abstract, but I think integrations like these conspire to make communication less efficient instead of more efficient. You should feel free to try such features--they're fun!--but don't expect them to change your life.
- Information Technology > Communications > Social Media (0.32)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.30)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.30)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.30)
AI Expert: We Should Stop Using So Much AI
Meredith Broussard is unusually well placed to dissect the ongoing hype around AI. She's a data scientist and associate professor at New York University, and she's been one of the leading researchers in the field of algorithmic bias for years. And though her own work leaves her buried in math problems, she's spent the last few years thinking about problems that mathematics can't solve. Her reflections have made their way into a new book about the future of AI. In More than a Glitch, Broussard argues that we are consistently too eager to apply artificial intelligence to social problems in inappropriate and damaging ways. Her central claim is that using technical tools to address social problems without considering race, gender, and ability can cause immense harm.
Broussard has also recently recovered from breast cancer, and after reading the fine print of her electronic medical records, she realized that an AI had played a part in her diagnosis--something that is increasingly common. That discovery led her to run her own experiment to learn more about how good AI was at cancer diagnostics. We sat down to talk about what she discovered, as well as the problems with the use of technology by police, the limits of "AI fairness," and the solutions she sees for some of the challenges AI is posing. The conversation has been edited for clarity and length. At the beginning of the pandemic, I was diagnosed with breast cancer.
Associations Between Natural Language Processing (NLP) Enriched Social Determinants of Health and Suicide Death among US Veterans
Mitra, Avijit, Pradhan, Richeek, Melamed, Rachel D, Chen, Kun, Hoaglin, David C, Tucker, Katherine L, Reisman, Joel I, Yang, Zhichao, Liu, Weisong, Tsai, Jack, Yu, Hong
Importance: Social determinants of health (SDOH) are known to be associated with increased risk of suicidal behaviors, but few studies utilized SDOH from unstructured electronic health record (EHR) notes. Objective: To investigate associations between suicide and recent SDOH, identified using structured and unstructured data. Design: Nested case-control study. Setting: EHR data from the US Veterans Health Administration (VHA). Participants: 6,122,785 Veterans who received care in the US VHA between October 1, 2010, and September 30, 2015. Exposures: Occurrence of SDOH over a maximum span of two years compared with no occurrence of SDOH. Main Outcomes and Measures: Cases of suicide deaths were matched with 4 controls on birth year, cohort entry date, sex, and duration of follow-up. We developed an NLP system to extract SDOH from unstructured notes. Structured data, NLP on unstructured data, and combining them yielded six, eight, and nine SDOH, respectively. Adjusted odds ratios (aORs) and 95% confidence intervals (CIs) were estimated using conditional logistic regression. Results: In our cohort, 8,821 Veterans died by suicide during 23,725,382 person-years of follow-up (incidence rate 37.18/100,000 person-years). Our cohort was mostly male (92.23%) and white (76.99%). Across the five common SDOH as covariates, NLP-extracted SDOH, on average, covered 80.03% of all SDOH occurrences. All SDOH, measured by structured data and NLP, were significantly associated with increased risk of suicide. The SDOH with the largest effect was legal problems (aOR=2.66, 95% CI=2.46-2.89), followed by violence (aOR=2.12, 95% CI=1.98-2.27). NLP-extracted and structured SDOH were also associated with suicide. Conclusions and Relevance: NLP-extracted SDOH were consistently and significantly associated with increased risk of suicide among Veterans, suggesting the potential of NLP in public health studies.
- North America > United States > Massachusetts > Middlesex County > Lowell (0.15)
- North America > United States > Massachusetts > Hampshire County > Amherst (0.14)
- North America > United States > Connecticut > Tolland County > Storrs (0.14)
- (5 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
Can Artificial Intelligence ethically improve society?
The Three Laws of Robotics, which state that a robot may not harm a human or allow a human to come to harm, must obey orders given by human beings, and must protect its own existence, were set out by Isaac Asimov 80 years ago, long before Artificial Intelligence became a reality. Despite this, they are still able to illustrate how humans have dealt with the ethical challenges of technology by protecting the users. Ethical challenges associated with technology are not inherently about the technology itself, but rather are a social problem. Technology, therefore, and in particular Artificial Intelligence, could be used to empower users and help us build a more ethical society. This approach, put forward in the article 'Ethical Idealism, Technology and Practice: a Manifesto,' will help us utilise technology for the betterment of society. There has long been the fear that humans would succeed in making machines so intelligent that they would end up rebelling against their creators.
For a Second There, Someone Thought Using Taser Drones to Stop School Shootings Was a Good Idea
Armed police couldn't stop the shooters in Buffalo and in Uvalde. But perhaps a very small drone equipped with a Taser could. That was the pitch from Axon CEO Rick Smith, who in a Thursday announcement proposed "non-lethal drones capable of incapacitating an active shooter in less than 60 seconds" (or so the press release goes), to be stationed inside schools. At the push of a panic button, a trained human pilot at a control center elsewhere in the country would launch a drone. With the help of a network of security cameras, the pilot would try to fire the drone's onboard Taser probes into the shooter's flesh, in the hope of keeping them down until police could arrive on the scene.
- North America > United States > Texas > Uvalde County > Uvalde (0.26)
- North America > United States > Florida (0.04)
- North America > United States > Arizona (0.04)
- (4 more...)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Education > Health & Safety > School Safety & Security > School Violence (1.00)
Futures of Digital Governance
Urs Gasser (ugasser@cyber.harvard.edu) is the Dean of the new TUM School of Social Sciences and Technology at the Technical University of Munich, Germany, and a Faculty Director of the Berkman Klein Center for Internet & Society at Harvard University, Cambridge, MA, USA. Virgílio Almeida (virgilio@dcc.ufmg.br) is a Professor Emeritus of Computer Science at the Federal University of Minas Gerais (UFMG), Brazil, and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University, Cambridge, MA, USA.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.45)
- South America > Brazil > Minas Gerais (0.24)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.24)
- (2 more...)
- Government (1.00)
- Law (0.70)
A deeper look into the impact of new technologies on our work
But before delving into the 'behind-the-scenes' of the US banking industry meeting the ATM, let's turn back time for a second -- to March 27th, 1998, at the New Tech 1998 conference in Denver, Colorado. There, Neil Postman, a prominent American cultural critic and professor at New York University, gave a keynote lecture. Professor Postman was a long-time scholar of how new technologies relate to human society, and his 1985 book 'Amusing Ourselves to Death', which rose to fame, shows how television technology was destroying public discourse and turning everything into entertainment. I think it resonates with how we feel about the impact of today's media and how our lives, exposed to it, are deteriorating. Since that book, Professor Postman strongly criticized the tendency to respond to all social problems with technical solutions.